
move check_dummy_inputs_allowed to common export utils #2114

Merged 9 commits into huggingface:main on Dec 19, 2024

Conversation

@eaidova (Contributor) commented on Dec 4, 2024:

What does this PR do?

check_dummy_inputs_allowed is a helpful function for any Optimum backend that involves PyTorch model conversion (e.g. OpenVINO). Its current location means that importing it triggers initialization of the full ONNX configuration for other backends that want to reuse it, which can lead to trouble resolving dependencies. I suggest moving it to the common utilities to avoid unexpected errors like the one shown below:

Example logs (issue visible on Windows with onnx > 1.16.2 and related to onnx/onnx#6267):

  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\intel\openvino\__init__.py", line 51, in <module>
    from .quantization import OVQuantizer
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\intel\openvino\quantization.py", line 47, in <module>
    from optimum.exporters.onnx.convert import check_dummy_inputs_are_allowed
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\exporters\__init__.py", line 16, in <module>
    from .tasks import TasksManager  # noqa
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\exporters\tasks.py", line 173, in <module>
    class TasksManager:
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\exporters\tasks.py", line 338, in TasksManager
    "clip-text-model": supported_tasks_mapping(
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\exporters\tasks.py", line 117, in supported_tasks_mapping
    importlib.import_module(f"optimum.exporters.{backend}.model_configs"), config_cls_name
  File "C:\Users\chuquanc\packages\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\optimum\exporters\onnx\model_configs.py", line 23, in <module>
    from ...onnx import merge_decoders
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\transformers\utils\import_utils.py", line 1766, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\chuquanc\openvino\py310\lib\site-packages\transformers\utils\import_utils.py", line 1780, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import optimum.onnx.graph_transformations because of the following error (look up to see its traceback):
DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.

A workaround for this problem has also been proposed on the optimum-intel side: huggingface/optimum-intel#1048.
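
For downstream consumers such as optimum-intel, the practical effect is that the helper can be imported without pulling in the ONNX exporter stack. Below is a minimal consumer-side sketch, assuming the function ends up re-exported from optimum.exporters.utils (the destination module is inferred from the PR title, not confirmed here) and mirroring the try/except fallback approach of the optimum-intel workaround:

    try:
        # Preferred after this PR: common export utils, no onnx import triggered.
        from optimum.exporters.utils import check_dummy_inputs_are_allowed
    except ImportError:
        # Older optimum releases: original location, which transitively imports onnx
        # and can fail on Windows with onnx > 1.16.2 (see traceback above).
        from optimum.exporters.onnx.convert import check_dummy_inputs_are_allowed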

@eaidova (Contributor, Author) commented on Dec 4, 2024:

@echarlaix @IlyasMoutawwakil could you please review?

@IlyasMoutawwakil (Member) left a comment:

LGTM ! Thanks !

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

github-merge-queue bot pushed a commit to openvinotoolkit/openvino.genai that referenced this pull request Dec 5, 2024
together with huggingface/optimum-intel#1048 and
huggingface/optimum#2114, allows me to run wwb with
already exported models on the Windows platform
Comment on lines -30 to -39
import onnx
from transformers.utils import is_accelerate_available, is_torch_available

from ...onnx import remove_duplicate_weights_from_tied_info


if is_torch_available():
    import torch.nn as nn

from ...onnx import merge_decoders
A collaborator commented:

I would prefer to keep the import there; why move it?

@eaidova (Contributor, Author) replied:

merge_decoders is imported from the graph_transformations submodule, which contains an onnx import:
https://github.com/huggingface/optimum/blob/main/optimum/onnx/graph_transformations.py#L19

Since that file depends heavily on onnx functionality, while merge_decoders is used only for postprocessing of specific configs (triggered only when a specific config behaviour is selected), I think it is better to allow using the rest of this module's functionality without requiring the onnx import.

As I said, there is a case where importing onnx crashes on Windows. This file contains basic functionality for configs that we also reuse for enabling new models in optimum-intel; moving this import lets us keep using the base config classes even when onnx is broken.
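
To illustrate the deferred-import pattern being described, here is a minimal sketch with a hypothetical helper name; the point is only that the onnx-dependent import happens inside the function rather than at module import time:

    def merge_exported_decoders(decoder_path, decoder_with_past_path):
        # Deferred import: merge_decoders lives in optimum.onnx.graph_transformations,
        # which imports onnx at module level. Importing it here keeps the surrounding
        # config module importable even when the onnx extension DLL fails to load.
        from optimum.onnx import merge_decoders

        # Call shown schematically; see optimum/onnx/graph_transformations.py
        # for the full signature.
        return merge_decoders(decoder_path, decoder_with_past_path)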

@echarlaix merged commit 35d35bd into huggingface:main on Dec 19, 2024
22 of 28 checks passed